Learning with Digital Agents: An Analysis based on the Activity Theory

Dolata, Mateusz, Katsiuba, Dzmitry, Wellnhammer, Natalie, Schwabe, Gerhard

arXiv.org Artificial Intelligence

Digital agents are considered a general-purpose technology. They spread quickly in private and organizational contexts, including education. Yet research lacks a conceptual framing to describe interaction with such agents in a holistic manner. Focusing on interaction with a pedagogical agent, i.e., a digital agent capable of natural-language interaction with a learner, we propose a model of learning activity based on activity theory. We use this model and a review of prior research on digital agents in education to analyze how various characteristics of the activity, including features of the pedagogical agent or the learner, influence learning outcomes. The analysis leads to the identification of IS research directions and to guidance for developers of pedagogical agents and digital agents in general. We conclude by extending the activity theory-based model beyond the context of education and show how it helps designers and researchers ask the right questions when creating a digital agent.


InteraRec: Screenshot Based Recommendations Using Multimodal Large Language Models

Karra, Saketh Reddy, Tulabandhula, Theja

arXiv.org Artificial Intelligence

Weblogs, comprising records of user activity on a website, offer valuable insights into user preferences, behavior, and interests. Numerous recommendation algorithms, employing strategies such as collaborative filtering, content-based filtering, and hybrid methods, leverage the data mined from these weblogs to provide personalized recommendations to users. Despite the abundance of information available in weblogs, identifying and extracting pertinent information and key features from them requires extensive engineering effort, and the intricate nature of the data makes interpretation difficult, especially for non-experts. In this study, we introduce an interactive recommendation framework, InteraRec, which diverges from conventional approaches that depend exclusively on weblogs for recommendation generation. The InteraRec framework captures high-frequency screenshots of web pages as users navigate a website. Leveraging state-of-the-art multimodal large language models (MLLMs), it extracts insights into user preferences from these screenshots by generating a textual summary based on predefined keywords. An LLM-integrated optimization setup then uses this summary to generate tailored recommendations. We also explore the integration of session-based recommendation systems into the InteraRec framework to enhance its overall performance, and we curate a new dataset of screenshots from product web pages on the Amazon website to validate the framework. Detailed experiments demonstrate the effectiveness of InteraRec in delivering valuable, personalized recommendations tailored to individual user preferences.
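The pipeline the abstract describes (screenshots → keyword-based MLLM summary → recommendation step) can be sketched in miniature. This is an illustrative assumption of the data flow only, not the authors' actual API: the function names, the text stubs standing in for screenshots, and the frequency-based ranking that replaces the real MLLM and LLM-optimization calls are all hypothetical.

```python
# Hypothetical sketch of the InteraRec data flow. Real screenshots and
# model calls are replaced with text stubs and simple counting.

def summarize_screenshots(screenshots, keywords):
    """Stand-in for the MLLM step: reduce screenshots to a
    keyword-indexed textual summary of apparent user preferences."""
    summary = {}
    for kw in keywords:
        hits = [s for s in screenshots if kw in s["content"]]
        summary[kw] = f"user viewed {len(hits)} page(s) about {kw}"
    return summary

def recommend(summary, catalog, top_k=2):
    """Stand-in for the LLM-integrated optimization step: rank catalog
    items by how often their category appears in the summary."""
    def score(item):
        text = summary.get(item["category"], "")
        return int(text.split()[2]) if text else 0  # pages viewed
    return sorted(catalog, key=score, reverse=True)[:top_k]

# Toy session: high-frequency "screenshots" represented as text stubs.
shots = [{"content": "running shoes product page"},
         {"content": "trail running shoes reviews"},
         {"content": "coffee maker product page"}]
summary = summarize_screenshots(shots, ["running", "coffee"])
recs = recommend(summary,
                 [{"name": "Trail Shoe X", "category": "running"},
                  {"name": "Espresso Y", "category": "coffee"},
                  {"name": "Blender Z", "category": "kitchen"}])
```

The design point the sketch preserves is that the summary, not the raw screenshots, is what the recommendation step consumes, which is what lets the framework swap in different downstream recommenders (e.g. the session-based systems the abstract mentions).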


Edge.org

#artificialintelligence

The conversation is on hold. The Edge community has hit the road... or they're staying home: preparing for the academic year to begin, wrapping up projects and starting new ones, celebrating with family and friends, or contemplating in solitude. After a hiatus, Edge is pleased to revive Summer Postcards: Edgies reporting in from wherever they are and on whatever they're doing, as the dog days wind down and the season comes to a close. As the world slowly returns to a "new normal," with enduring COVID restrictions in the midst of renewed vaccine freedoms, this year's collection is a testament to change (temporary and lasting), a consideration of loss (will travel ever be like it was?), and a celebration of questions (that still need answering). The hammock may be away until next year, but the memories remain.

I spent the summer writing and revising the final section of a longish novel I started in 2019. It seems now as though I've been from 1946 to 2021 on my hands and knees. Various lockdowns have been a liberation from obligations and the luggage carousel, and I've never known such sweet and total focus for months on end. We have the luxury of living in the country--no shortage of big skies and moody walks. All our few breaks were in the UK--Scotland, the Lake District, the West Country. Even in our remote part of the Lakes, I had to keep on writing--as in photo. The best novel I read this summer was Sandro Veronesi's The Hummingbird. The best non-fiction was Peter Godfrey-Smith's Metazoa: Animal Life and the Birth of the Mind. I gave time also to some wonderful novellas--a perfect fictional form for you too-busy scientists.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me.

In 2019, Časlav Brukner and I were walking on a beach on Lamma Island, near Hong Kong, marvelling together at the astonishing strangeness of quantum phenomena. This summer, the conversation with Časlav has continued on another island, and quite an island: Lesbos, the northern Greek island near the Turkish coast. Lesbos is the place where lyrical poetry was born. Here lived Sappho and Alcaeus.


The Rise of A.I. Fighter Pilots

The New Yorker

This content can also be viewed on the site it originates from. On a cloudless morning last May, a pilot took off from the Niagara Falls International Airport, heading for restricted military airspace over Lake Ontario. The plane, which bore the insignia of the United States Air Force, was a repurposed Czechoslovak jet, an L-39 Albatros, purchased by a private defense contractor. The bay in front of the cockpit was filled with sensors and computer processors that recorded the aircraft's performance. For two hours, the pilot flew counterclockwise around the lake.


Deep Learning for Everyone – and (Almost) Free

@machinelearnbot

Summary: The most important developments in Deep Learning and AI in the last year may not be technical at all, but rather a major change in business model. In the space of about six months, all the majors have made their Deep Learning IP open source, hoping to gain on the competition through the power of the broader developer base and wide adoption. To say that the last year has been big for Deep Learning is an understatement. There have been some spectacular technical innovations, such as Microsoft winning the ImageNet competition with a neural net comprising 152 layers (where six or seven layers had been more the norm). But the big action, especially in the last six months, has been in the business model for Deep Learning.
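The 152-layer network mentioned above was Microsoft's ResNet, whose key technical ingredient is the residual (skip) connection: each layer adds a small correction to an identity path, which is what makes depths like 152 trainable at all. A minimal NumPy sketch of that idea, assuming toy dimensions and random weights rather than the real architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # y = relu(x + F(x)): the identity "skip" path lets the signal
    # (and, during training, gradients) pass through even when the
    # learned correction F is weak.
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(0)
dim = 8
x = rng.standard_normal(dim)
weights = [(0.05 * rng.standard_normal((dim, dim)),
            0.05 * rng.standard_normal((dim, dim)))
           for _ in range(152)]  # one weight pair per block

y = x
for w1, w2 in weights:  # stack 152 residual blocks, as in the paper
    y = residual_block(y, w1, w2)
```

With plain (non-residual) layers at this depth, the repeated matrix products would shrink or blow up the signal; with the skip connection, the output stays a bounded perturbation of the input even after 152 blocks.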


Now You Too Can Buy Cloud-Based Deep Learning

#artificialintelligence

Facebook's deep-learning artificial intelligence systems have learned to recognize your friends in your photos, and Google's AI has learned to anticipate what you'll be searching for. But there's no need to feel left out, even if your company's computers haven't learned much lately. A growing number of tech giants and startups have begun offering machine learning as a cloud service. That means other companies and startups do not need to develop their own specialized hardware or software to apply deep learning--the high-powered version du jour of machine learning--to their specific business needs. "Deep-learning algorithms dominate other machine-learning methods when data sets are large," says Zachary Chase Lipton, a deep-learning researcher in the Artificial Intelligence Group at the University of California, San Diego, who has examined cloud AI services from companies such as Amazon and IBM.
